31 research outputs found

    Multi-site, Multi-domain Airway Tree Modeling (ATM'22): A Public Benchmark for Pulmonary Airway Segmentation

    Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have pushed pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, limited effort has been directed to the quantitative comparison of newly emerged algorithms, which have been driven by the maturity of deep learning based approaches and the clinical need to resolve finer details of distal airways for early intervention in pulmonary diseases. Thus far, publicly available annotated datasets are extremely limited, hindering the development of data-driven methods and the detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation, comprising 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Quantitative and qualitative results revealed that deep learning models embedding topological continuity enhancement achieved superior performance in general. The ATM'22 challenge maintains an open-call design: the training data and the gold standard evaluation are available upon successful registration via its homepage. Comment: 32 pages, 16 figures. Homepage: https://atm22.grand-challenge.org/. Submitted.
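    The abstract credits "topological continuity enhancement" without naming a specific loss; one widely used topology-aware option for airway-like tubular structures is the soft centerline-Dice (clDice) loss of Shit et al. The PyTorch sketch below illustrates that published formulation, not the method of any particular ATM'22 team.

```python
import torch
import torch.nn.functional as F

def soft_erode(img):
    # Grayscale erosion: min-pooling implemented as negated max-pooling.
    return -F.max_pool3d(-img, kernel_size=3, stride=1, padding=1)

def soft_dilate(img):
    return F.max_pool3d(img, kernel_size=3, stride=1, padding=1)

def soft_open(img):
    return soft_dilate(soft_erode(img))

def soft_skel(img, iters=10):
    # Differentiable morphological thinning of a soft probability map.
    skel = F.relu(img - soft_open(img))
    for _ in range(iters):
        img = soft_erode(img)
        delta = F.relu(img - soft_open(img))
        skel = skel + F.relu(delta - skel * delta)
    return skel

def cl_dice_loss(pred, target, iters=10, eps=1e-6):
    # pred, target: (B, 1, D, H, W) soft foreground probabilities in [0, 1].
    skel_p, skel_t = soft_skel(pred, iters), soft_skel(target, iters)
    tprec = ((skel_p * target).sum() + eps) / (skel_p.sum() + eps)  # topology precision
    tsens = ((skel_t * pred).sum() + eps) / (skel_t.sum() + eps)    # topology sensitivity
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)
```

    In practice such a term is weighted against a voxel-wise Dice or cross-entropy loss, so that overall overlap is preserved while thin distal branches stay connected.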

    Hunting imaging biomarkers in pulmonary fibrosis: Benchmarks of the AIIB23 challenge

    Airway-related quantitative imaging biomarkers are crucial for examination, diagnosis, and prognosis in pulmonary diseases. However, manual delineation of airway structures remains prohibitively time-consuming. While significant efforts have been made towards enhancing automatic airway modelling, currently available public datasets predominantly concentrate on lung diseases with moderate morphological variations. The intricate honeycombing patterns present in the lung tissue of fibrotic lung disease patients exacerbate the challenges, often leading to various prediction errors. To address this issue, the 'Airway-Informed Quantitative CT Imaging Biomarker for Fibrotic Lung Disease 2023' (AIIB23) competition was organized in conjunction with the 2023 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). The airway structures were meticulously annotated by three experienced radiologists. Competitors were encouraged to develop automatic airway segmentation models with high robustness and generalization ability, and then to explore the quantitative imaging biomarker (QIB) most correlated with mortality. A training set of 120 high-resolution computed tomography (HRCT) scans was publicly released with expert annotations and mortality status. The online validation set incorporated 52 HRCT scans from patients with fibrotic lung disease, and the offline test set included 140 cases from fibrosis and COVID-19 patients. The results showed that the capacity to extract airway trees from patients with fibrotic lung disease can be enhanced by introducing a voxel-wise weighted general union loss and a continuity loss. In addition to the competitive image biomarkers for mortality prediction, a strong airway-derived biomarker (hazard ratio > 1.5, p < 0.0001) was revealed for survival prognostication compared with existing clinical measurements, clinician assessment, and AI-based biomarkers.
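    Hazard ratios like the one reported are typically estimated with a Cox proportional-hazards model. A minimal sketch using the lifelines library on synthetic data (the biomarker and column names are hypothetical placeholders, not the AIIB23 variables):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120
biomarker = rng.normal(size=n)  # stand-in for an airway-derived QIB
df = pd.DataFrame({
    "airway_biomarker": biomarker,
    # Purely synthetic: survival shortened as the biomarker increases.
    "survival_days": rng.exponential(365.0 * np.exp(-0.5 * biomarker)),
    "event": rng.integers(0, 2, size=n),  # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_days", event_col="event")
# exp(coef) is the hazard ratio; the abstract reports HR > 1.5 at p < 0.0001.
print(cph.summary[["exp(coef)", "p"]])
```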

    Fetal Brain Tissue Annotation and Segmentation Challenge Results

    In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both research and clinical contexts. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. We therefore organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms on an international level. The challenge used the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven tissues (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, brainstem, deep grey matter). Twenty international teams participated, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm, built on an asymmetrical U-Net architecture, performed significantly better than the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero. Comment: Results from FeTA Challenge 2021, held at MICCAI; manuscript submitted.
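    Four of the top five teams used ensembles; the paper does not prescribe a single recipe, but a common pattern is probability averaging across independently trained networks. A minimal PyTorch sketch, assuming models that output per-class logits:

```python
import torch

@torch.no_grad()
def ensemble_predict(models, volume):
    # volume: (B, C_in, D, H, W); each model returns per-class logits.
    probs = None
    for model in models:
        model.eval()
        p = torch.softmax(model(volume), dim=1)
        probs = p if probs is None else probs + p
    # Average the softmax maps, then pick the most likely tissue per voxel.
    return (probs / len(models)).argmax(dim=1)
```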

    Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results

    Segmentation is a critical step in analyzing the developing human fetal brain. There have been vast improvements in automatic segmentation methods in the past several years, and the Fetal Brain Tissue Annotation (FeTA) Challenge 2021 helped to establish an excellent standard for fetal brain segmentation. However, FeTA 2021 was a single-center study, and the generalizability of algorithms across imaging centers remained unsolved, limiting real-world clinical applicability. The multi-center FeTA Challenge 2022 focused on advancing the generalizability of fetal brain segmentation algorithms for magnetic resonance imaging (MRI). In FeTA 2022, the training dataset contained images and corresponding manually annotated multi-class labels from two imaging centers, and the testing data contained images from these two centers as well as from two additional unseen centers. The data from different centers varied in many respects, including the scanners used, the imaging parameters, and the fetal brain super-resolution algorithms applied. Sixteen teams participated in the challenge, and 17 algorithms were evaluated. Here, a detailed overview and analysis of the challenge results are provided, focusing on the generalizability of the submissions. Both in- and out-of-domain, the white matter and ventricles were segmented with the highest accuracy, while the most challenging structure remained the cerebral cortex due to its anatomical complexity. The FeTA Challenge 2022 successfully evaluated and advanced the generalizability of multi-class fetal brain tissue segmentation algorithms for MRI, and it continues to benchmark new algorithms. The resulting methods contribute to improving the analysis of brain development in utero. Comment: Results from FeTA Challenge 2022, held at MICCAI; manuscript submitted. Supplementary info (including submission method descriptions) available here: https://zenodo.org/records/1062864
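    A generalizability analysis like the one described reduces to stratifying a per-class overlap metric by imaging center. A small NumPy sketch of that bookkeeping (the case tuples and label set are hypothetical):

```python
import numpy as np
from collections import defaultdict

def dice(pred, gt, label):
    p, g = pred == label, gt == label
    denom = p.sum() + g.sum()
    return 2.0 * (p & g).sum() / denom if denom else np.nan

def per_center_dice(cases, labels):
    """cases: iterable of (center_id, pred_volume, gt_volume) label maps."""
    by_center = defaultdict(list)
    for center, pred, gt in cases:
        by_center[center].append([dice(pred, gt, lab) for lab in labels])
    # Mean Dice per class for each center, e.g. seen vs. unseen sites.
    return {c: np.nanmean(rows, axis=0) for c, rows in by_center.items()}
```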

    Self-supervised Advanced Deep Learning for Characterization of Brain Tumor Aggressiveness and Prognosis Analysis Through Multimodality MRI Imaging

    Early detection, automatic delineation, and volume estimation are vital tasks for survival prediction and treatment planning in brain tumor patients. However, gliomas are often difficult to localize and delineate with conventional manual segmentation because of their high variation in shape, location, and appearance. Moreover, manual delineation is laborious and time-consuming work for a neurosurgeon, and segmentation results are difficult to replicate due to practical operating factors. In recent years, convolutional neural networks (CNNs) have been widely used for the automated classification and segmentation of medical images. The focus of this thesis is therefore to develop a system for automating brain tumor analysis (brain tumor segmentation and survival prediction) using deep learning techniques applied to MRI, segmenting the brain tumor classes (enhancing tumor, non-enhancing tumor, and peritumoral edema) and estimating patients' survival days for prognosis analysis. In this study, various 2D- and 3D-based deep learning models were designed and tested for multi-class brain tumor segmentation and survival prediction. We first proposed a 2D CNN model (BrainSeg-DCANet) and then a 2D multiview (axial, sagittal, and coronal) deep inception residual network for brain tumor segmentation. Thereafter, a 3D CNN-based two-stage self-supervised contrastive learning approach using parallel multiview multiscale attention-based CNN transformers was introduced for 3D volumetric brain tumor segmentation. Finally, 3D MR image-based survival prediction was performed: multiple feature extraction techniques extract features from the 3D volumetric MRI, and different regression techniques are then applied to the extracted features. The thesis's findings showed that the proposed techniques can produce a clinically helpful computer-aided tool for brain tumor segmentation and survival prediction from MRI images.
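    The survival pipeline described above (hand-crafted features from the 3D volume, then regression) can be illustrated with a short scikit-learn sketch; the features below are toy placeholders, not the descriptors used in the thesis:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def volume_features(mri, mask):
    """Toy features from a 3D MRI volume and its tumor mask: tumor voxel
    count, intensity statistics, and bounding-box extents."""
    tumor = mri[mask > 0]
    if tumor.size == 0:
        return np.zeros(6)
    zz, yy, xx = np.where(mask > 0)
    extents = [float(a.max() - a.min() + 1) for a in (zz, yy, xx)]
    return np.array([mask.sum(), tumor.mean(), tumor.std(), *extents])

# X stacks one feature row per patient; y holds survival days.
# model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
```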

    EEG-based Effective Connectivity Analysis with Graph Theory Approach for Cognitive Load Assessment of Multimedia Learning in Adults

    This dissertation presents cognitive load assessment of multimedia learning content using an electroencephalography (EEG)-based effective connectivity approach with graph-theoretic network analysis. Cognitive load assessment during multimedia learning plays a key role in understanding the complexity of multimedia-based learning tasks. The widely used methods for assessing cognitive load from EEG are traditional feature extraction techniques such as power spectral density and entropy measures.

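    Unlike PSD-style features, the effective-connectivity approach yields a directed graph over channels that can be summarized with graph-theoretic metrics. A rough sketch of one common recipe, pairwise Granger causality (the lag and threshold are arbitrary illustrative choices, and the dissertation may use a different connectivity estimator):

```python
import numpy as np
import networkx as nx
from statsmodels.tsa.stattools import grangercausalitytests

def effective_connectivity_graph(eeg, maxlag=5, p_thresh=0.01):
    """eeg: (n_channels, n_samples). Adds edge i -> j when channel i
    Granger-causes channel j at the chosen significance level."""
    n_channels = eeg.shape[0]
    G = nx.DiGraph()
    G.add_nodes_from(range(n_channels))
    for i in range(n_channels):
        for j in range(n_channels):
            if i == j:
                continue
            # statsmodels tests whether column 1 Granger-causes column 0.
            res = grangercausalitytests(
                np.column_stack([eeg[j], eeg[i]]), maxlag=maxlag, verbose=False)
            p = min(r[0]["ssr_ftest"][1] for r in res.values())
            if p < p_thresh:
                G.add_edge(i, j)
    return G

# Candidate graph-theoretic load markers:
# nx.density(G), dict(G.in_degree()), nx.average_clustering(G.to_undirected())
```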

    Late-Ensemble of Convolutional Neural Networks with Test Time Augmentation for Chest XR COVID-19 Detection

    COVID-19, a severe acute respiratory syndrome, spread aggressively among global populations in just a few months. Since then, four dominant variants (Alpha, Beta, Gamma, and Delta) have emerged that are far more contagious than the original strain. Accurate and timely diagnosis of COVID-19 is critical for analyzing lung damage, for treatment, and for quarantine management [7]. CT, MRI, or X-ray image analysis using deep learning provides an efficient and accurate diagnosis of COVID-19 that could help counter its outbreak. With the aim of providing efficient multi-class COVID-19 detection, the Chest XR COVID-19 Detection challenge was recently organized [12]. In this paper, features extracted from various pre-trained convolutional neural networks, fine-tuned on the challenge dataset, are combined by late fusion. DenseNet201 (with the Adam optimizer) and EfficientNet-B3 are fine-tuned on the challenge dataset, and their predictions are ensembled to obtain the final result. We also apply test-time augmentation after the late ensembling to further improve the performance of the proposed solution. Evaluation on Chest XR COVID-19 showed that our model achieved an overall accuracy of 95.67%. The code is publicly available. The proposed approach ranked 6th in the Chest XR COVID-19 Detection Challenge [1].
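    A minimal sketch of the late-ensemble-plus-TTA recipe described above, averaging softmax scores over torchvision's pretrained backbones and a single horizontal-flip augmentation (the class count and augmentation choice are assumptions for illustration):

```python
import torch
import torchvision.transforms.functional as TF
from torchvision import models

NUM_CLASSES = 4  # assumed number of challenge classes

# Pretrained backbones with re-fitted heads, to be fine-tuned on the dataset.
densenet = models.densenet201(weights="IMAGENET1K_V1")
densenet.classifier = torch.nn.Linear(densenet.classifier.in_features, NUM_CLASSES)
effnet = models.efficientnet_b3(weights="IMAGENET1K_V1")
effnet.classifier[1] = torch.nn.Linear(effnet.classifier[1].in_features, NUM_CLASSES)

@torch.no_grad()
def predict(nets, x):
    """Late ensemble with horizontal-flip test-time augmentation:
    average softmax scores over all networks and augmented views."""
    views = [x, TF.hflip(x)]
    probs = 0.0
    for net in nets:
        net.eval()
        for v in views:
            probs = probs + torch.softmax(net(v), dim=1)
    return probs / (len(nets) * len(views))

# scores = predict([densenet, effnet], batch)  # batch: (B, 3, H, W)
```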